certainty factor


ff4ERA: A new Fuzzy Framework for Ethical Risk Assessment in AI

Dyoub, Abeer, Letteri, Ivan, Lisi, Francesca A.

arXiv.org Artificial Intelligence

The emergence of Symbiotic AI (SAI) introduces new challenges to ethical decision-making as it deepens human-AI collaboration. As symbiosis grows, AI systems pose greater ethical risks, including harm to human rights and trust. Ethical Risk Assessment (ERA) thus becomes crucial for guiding decisions that minimize such risks. However, ERA is hindered by uncertainty, vagueness, and incomplete information, and morality itself is context-dependent and imprecise. This motivates the need for a flexible, transparent, yet robust framework for ERA. Our work supports ethical decision-making by quantitatively assessing and prioritizing multiple ethical risks so that artificial agents can select actions aligned with human values and acceptable risk levels. We introduce ff4ERA, a fuzzy framework that integrates Fuzzy Logic, the Fuzzy Analytic Hierarchy Process (FAHP), and Certainty Factors (CF) to quantify ethical risks via an Ethical Risk Score (ERS) for each risk type. The final ERS combines the FAHP-derived weight, propagated CF, and risk level. The framework offers a robust mathematical approach for collaborative ERA modeling and systematic, step-by-step analysis. A case study confirms that ff4ERA yields context-sensitive, ethically meaningful risk scores reflecting both expert input and sensor-based evidence. Risk scores vary consistently with relevant factors while remaining robust to unrelated inputs. Local sensitivity analysis shows predictable, mostly monotonic behavior across perturbations, and global Sobol analysis highlights the dominant influence of expert-defined weights and certainty factors, validating the model design. Overall, the results demonstrate ff4ERA's ability to produce interpretable, traceable, and risk-aware ethical assessments, enabling what-if analyses and guiding designers in calibrating membership functions and expert judgments for reliable ethical decision support.
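The abstract describes the ERS as a combination of an FAHP-derived weight, a propagated certainty factor, and a fuzzy risk level. A minimal sketch of that idea, assuming a simple multiplicative aggregation and a triangular membership function (the paper's exact aggregation rule and fuzzy sets are not given here, so both are illustrative assumptions):

```python
# Hypothetical sketch of an Ethical Risk Score in the spirit of ff4ERA.
# The rule ERS = weight * certainty_factor * risk_level is an assumption
# for illustration; the framework's actual aggregation may differ.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def ethical_risk_score(fahp_weight, certainty_factor, risk_level):
    """Combine an FAHP-derived weight, a propagated CF, and a fuzzy
    risk level into a single score in [0, 1]."""
    return fahp_weight * certainty_factor * risk_level

# Example: a risk level inferred from a sensor reading via a fuzzy set "high".
reading = 0.8
risk_level = triangular(reading, 0.5, 1.0, 1.5)  # membership in "high risk"
score = ethical_risk_score(fahp_weight=0.4, certainty_factor=0.9,
                           risk_level=risk_level)
```

A what-if analysis of the kind the abstract mentions then amounts to perturbing `fahp_weight`, `certainty_factor`, or the membership parameters and observing how `score` moves.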


An Expert System to Diagnose Spinal Disorders

Dashti, Seyed Mohammad Sadegh, Dashti, Seyedeh Fatemeh

arXiv.org Artificial Intelligence

Objective: To date, traditional invasive approaches have been the only means used to diagnose spinal disorders. Traditional manual diagnostics require a high workload, and diagnostic errors are likely to occur due to the prolonged work of physicians. In this research, we develop an expert system based on a hybrid inference algorithm and comprehensive integrated knowledge for assisting experts in the fast and high-quality diagnosis of spinal disorders. Methods: First, for each spinal anomaly, accurate and integrated knowledge was acquired from related experts and resources. Second, based on probability distributions and dependencies between the symptoms of each anomaly, a unique numerical value known as the certainty effect value was assigned to each symptom. Third, a new hybrid inference algorithm, an incorporation of backward-chaining inference and the theory of uncertainty, was designed to obtain excellent performance. Results: The proposed expert system was evaluated in two different phases: real-world samples and medical records evaluation. Evaluations show that in terms of real-world sample analysis, the system achieved excellent accuracy. Application of the system to samples with anomalies revealed the degree of severity of disorders and the risk of development of abnormalities in unhealthy and healthy patients. In the case of medical records analysis, our expert system proved to have promising performance, very close to that of experts. Conclusion: Evaluations suggest that the proposed expert system provides promising performance, helping specialists to validate the accuracy and integrity of their diagnoses. It can also serve as intelligent educational software for medical students to gain familiarity with the spinal disorder diagnosis process and related symptoms.
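The hybrid inference the abstract describes pairs backward chaining with uncertain reasoning over per-symptom certainty values. A toy sketch of that combination, using MYCIN-style parallel combination of certainty factors (the rule contents, numbers, and the name "certainty effect value" mapping are illustrative assumptions, not the paper's actual knowledge base):

```python
# Illustrative only: a backward-chaining step that accumulates certainty
# from observed symptoms. Rules and values are invented for the example.

RULES = {
    # hypothesis: list of (symptom, certainty effect value)
    "disc_herniation": [("leg_pain", 0.7), ("numbness", 0.5)],
}

def combine_cf(cf1, cf2):
    """MYCIN-style parallel combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

def diagnose(hypothesis, observed):
    """Backward chaining on a goal hypothesis: accumulate certainty
    from each observed symptom that supports it."""
    cf = 0.0
    for symptom, effect in RULES[hypothesis]:
        if symptom in observed:
            cf = combine_cf(cf, effect)
    return cf

certainty = diagnose("disc_herniation", {"leg_pain", "numbness"})
```

The combination rule guarantees the accumulated certainty stays in [0, 1] and grows monotonically as supporting evidence arrives.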


A Generic Knowledge Based Medical Diagnosis Expert System

Huang, Xin, Tang, Xuejiao, Zhang, Wenbin, Pei, Shichao, Zhang, Ji, Zhang, Mingli, Liu, Zhen, Chen, Ruijun, Huang, Yiyi

arXiv.org Artificial Intelligence

An expert system can process large amounts of known information and apply reasoning capabilities to provide conclusions; it employs human knowledge captured in an automated system to solve problems that typically require human expertise. In this paper we propose the design and development of a medical knowledge-based system (MKBS) for disease diagnosis from symptoms. It provides rich features for searching properties such as symptoms, treatments, and hierarchical clusters of particular diseases. The system supports a knowledge construction module and an inference engine module. The knowledge construction module is built on rules represented in a tree structure, and the properties of a particular disease are stored as a semantic net.
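A minimal sketch of the MKBS idea: disease properties held in a semantic-net-like structure, with an inference step that ranks diseases by symptom overlap. The data, scoring rule, and names are illustrative assumptions; the paper's rule tree and inference engine are more elaborate.

```python
# Hypothetical knowledge base: each disease node carries its properties,
# in the spirit of a semantic net.

KNOWLEDGE = {
    "influenza": {"symptoms": {"fever", "cough", "fatigue"},
                  "treatment": "rest and fluids"},
    "migraine":  {"symptoms": {"headache", "nausea"},
                  "treatment": "analgesics"},
}

def infer(observed):
    """Score each disease by the fraction of its symptoms observed,
    and return the best match together with all scores."""
    scores = {}
    for disease, props in KNOWLEDGE.items():
        required = props["symptoms"]
        scores[disease] = len(required & observed) / len(required)
    best = max(scores, key=scores.get)
    return best, scores

best, scores = infer({"fever", "cough"})
```

Searching a property, as the abstract describes, is then a lookup such as `KNOWLEDGE[best]["treatment"]`.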


Techniques and Methodology

AI Magazine

Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213. Editors' Note: Many expert systems require some means of handling heuristic rules whose conclusions are less than certain. Bayesian techniques and other numerical scoring methods have been developed to combine and propagate certainty measures as the expert system draws inferences in solving different problems. Doyle's paper argues that it is difficult for a human expert to produce reliable probabilities or numerical scoring factors for an inference rule, and that a radically different approach to the problem should be considered. He essentially suggests that the expert be encouraged to think in terms of specific instances which would conflict with the general rule and to encode this knowledge explicitly. Methodologically this seems to be very appealing, and it helps to make both explicit and rigorous some of the techniques currently used by knowledge engineers when they encode and refine the expert's knowledge. We would welcome comments and criticisms of this approach from those steeped in the practical issues of constructing large rule-based expert systems. Probabilistic rules and their variants have recently supported several successful applications of expert systems, in spite of the difficulty of committing informants to particular conditional probabilities or "certainty factors," and in spite of the experimentally observed insensitivity of system performance to perturbations of the chosen values. Here we survey recent developments concerning reasoned assumptions which offer hope for avoiding the practical elusiveness of probabilistic rules while retaining theoretical power, for basing systems on the information unhesitatingly gained from expert informants, and for reconstructing the entailed degrees of belief later.


Eighth Workshop on the Validation and Verification of Knowledge-Based Systems

AI Magazine

The Workshop on the Validation and Verification of Knowledge-Based Systems gathers researchers from government, industry, and academia to present the most recent information about this important development aspect of knowledge-based systems (KBSs). The 1995 workshop focused on nontraditional KBSs that are developed using more than just the simple rule-based paradigm. This new focus showed how researchers are adjusting to the shift in KBS technology from stand-alone rule-based expert systems to embedded systems that use object-oriented technology, uncertainty, and nonmonotonic reasoning. In "Specification Refinement of Object-Oriented KBSs," A. Vermesan (Foundation for Research in Economics and Business Administration, Norway) looks at KBSs that perform reasoning in a framework of structured objects. Her approach is to verify that as details are added to the specification of a KBS, these additions are consistent with the initial abstract specification.


Knowledge Verification Base

AI Magazine

He points out that one of the key features these systems lack is "a suitable verification methodology or a technique for testing the consistency and completeness of a rule set." It is precisely this feature that we address here. LES is a generic rule-based expert system building tool (Laffey, Perkins, and Nguyen 1986) similar to EMYCIN (Van Melle 1981) that has been used as a framework to construct expert systems in many areas, such as electronic equipment diagnosis, design verification, photointerpretation, and hazard analysis. LES represents factual data in its frame database and heuristic and control knowledge in its production rules. LES allows the knowledge engineer to use both data-driven and goal-driven rules.


Probabilistic Reasoning and Certainty Factors

AI Classics

The development of automated assistance for medical diagnosis and decision making is an area of both theoretical and practical interest. Of methods for utilizing evidence to select diagnoses or decisions, probability theory has the firmest appeal. Probability theory in the form of Bayes' Theorem has been used by a number of workers (Ross, 1972). Notable among recent developments are those of de Dombal and coworkers (de Dombal, 1973; de Dombal et al., 1974; 1975) and Pipberger and coworkers (Pipberger et al., 1975). The usefulness of Bayes' Theorem is limited by practical difficulties, principally the lack of data adequate to estimate accurately the a priori and conditional probabilities used in the theorem. One attempt to mitigate this problem has been to assume statistical independence among various pieces of evidence. How seriously this approximation affects results is often unclear, and correction mechanisms have been explored (Ross, 1972; Norusis and Jacquez, 1975a; 1975b). Even the independence assumption requires an unmanageable number of estimates of probabilities for most applications with realistic complexity.
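The independence approximation the passage describes reduces Bayes' Theorem to a naive Bayes posterior, P(d | e1..en) ∝ P(d) · Π P(ei | d). A short sketch with made-up priors and conditionals (the disease names and numbers are illustrative only):

```python
# Naive Bayes under the statistical-independence assumption the passage
# discusses. All probabilities here are invented for illustration.

def naive_bayes_posterior(priors, conditionals, evidence):
    """P(d | e1..en) ∝ P(d) * Π P(ei | d), assuming the evidence items
    are conditionally independent given the diagnosis d."""
    unnorm = {}
    for d, prior in priors.items():
        p = prior
        for e in evidence:
            p *= conditionals[d][e]
        unnorm[d] = p
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

priors = {"appendicitis": 0.1, "gastritis": 0.9}
conditionals = {
    "appendicitis": {"rlq_pain": 0.8, "fever": 0.6},
    "gastritis":    {"rlq_pain": 0.1, "fever": 0.2},
}
posterior = naive_bayes_posterior(priors, conditionals, ["rlq_pain", "fever"])
```

The practical difficulty the passage raises is visible in the inputs: even this toy version needs one conditional probability per (diagnosis, evidence) pair, and without the independence assumption the number of required estimates grows exponentially in the evidence set.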


Ambiguity-Driven Fuzzy C-Means Clustering: How to Detect Uncertain Clustered Records

Ghaffari, Meysam, Ghadiri, Nasser

arXiv.org Artificial Intelligence

As a well-known clustering algorithm, Fuzzy C-Means (FCM) allows each input sample to belong to more than one cluster, providing more flexibility than non-fuzzy clustering methods. However, the accuracy of FCM is subject to false detections caused by noisy records, weak feature selection, and the low certainty of the algorithm in some cases. These false detections matter greatly in decision-making application domains like network security and medical diagnosis, where weak decisions based on such false detections may lead to catastrophic outcomes. They mainly emerge from making decisions about a subset of records that do not provide enough evidence to make a good decision. In this paper, we propose a method for detecting such ambiguous records in FCM by introducing a certainty factor to decrease invalid detections. This approach enables us to send the detected ambiguous records to another discrimination method for a deeper investigation, thus increasing the accuracy by lowering the error rate. Most of the records are still processed quickly and with a low error rate, which prevents performance loss compared to similar hybrid methods. Experimental results of applying the proposed method on several datasets from different domains show a significant decrease in error rate as well as improved sensitivity of the algorithm.
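The core idea can be sketched directly on an FCM membership matrix: a record is ambiguous when its two highest cluster memberships are close, i.e. when a simple certainty factor (their difference) falls below a threshold. The threshold value and the exact form of the certainty factor are assumptions for illustration, not the paper's definition.

```python
# Sketch: flag FCM records whose top-two membership degrees are close,
# so they can be routed to a second, deeper discrimination method.

def ambiguous_records(memberships, threshold=0.2):
    """memberships: list of per-record membership vectors (each sums to 1).
    Returns indices of records whose top-two memberships differ by less
    than `threshold` (an assumed parameter)."""
    flagged = []
    for i, u in enumerate(memberships):
        top = sorted(u, reverse=True)
        certainty = top[0] - top[1]  # simple per-record certainty factor
        if certainty < threshold:
            flagged.append(i)
    return flagged

U = [
    [0.90, 0.05, 0.05],  # confidently in cluster 0
    [0.45, 0.40, 0.15],  # ambiguous between clusters 0 and 1
]
flags = ambiguous_records(U)
```

Only the flagged records incur the cost of the second method, which is why the hybrid keeps the fast path for the bulk of the data.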


Modular Belief Updates and Confusion about Measures of Certainty in Artificial Intelligence Research

Horvitz, Eric J., Heckerman, David

arXiv.org Artificial Intelligence

Over the last decade, there has been growing interest in the use of measures of change in belief for reasoning with uncertainty in artificial intelligence research. An important characteristic of several methodologies that reason with changes in belief, or belief updates, is a property that we term modularity. We call updates that satisfy this property modular updates. Whereas probabilistic measures of belief update that satisfy the modularity property were first discovered in the nineteenth century, knowledge and discussion of these quantities remain obscure in artificial intelligence research. We define modular updates and discuss their inappropriate use in two influential expert systems.
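The modularity property is easiest to see in the odds form of Bayes' rule: an update by a likelihood ratio depends only on the current belief and the new evidence, so independent updates compose in any order. A small sketch under that standard probabilistic reading (the numbers are illustrative, and this is not the paper's own formalization):

```python
# Modular belief updates in odds form: posterior odds = prior odds * ratio.
# Because each update touches only the current belief, the order of
# independent updates does not matter.

def update_odds(prob, likelihood_ratio):
    """One modular belief update on a probability in (0, 1)."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

p = 0.5
for ratio in (3.0, 0.5, 4.0):  # three independent pieces of evidence
    p = update_odds(p, ratio)
```

Starting from even odds, the three ratios multiply to 6, so the final belief is 6/7 regardless of the order in which the evidence arrives.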


A Knowledge Engineer's Comparison of Three Evidence Aggregation Methods

Mitchell, Donald H., Harp, Steven A., Simkin, David K.

arXiv.org Artificial Intelligence

The comparisons of uncertainty calculi from the last two Uncertainty Workshops have all used theoretical probabilistic accuracy as the sole metric. While mathematical correctness is important, there are other factors which should be considered when developing reasoning systems. These other factors include, among other things, the error in uncertainty measures obtainable for the problem and the effect of this error on the performance of the resulting system. There are some domains in which many of the interesting conditional probabilities can be objectively estimated. For example, census data allows various characterizations of individuals with a reasonable degree of confidence.